
dsl: Move SubDimension thickness evaluation to the thicknesses themselves #2470

Merged: 20 commits into master, Nov 13, 2024

Conversation

@EdCaunt (Contributor) commented Oct 18, 2024

Fixes issues with standalone use of SubDimension.symbolic_min or SubDimension.symbolic_max in an Operator. Previously, variants of

import numpy as np
from devito import Dimension, SubDimension, Function, Eq, Operator

x = Dimension('x')
ix = SubDimension.left('ix', x, 2)
f = Function(name="f", dimensions=(ix,), shape=(5,), dtype=np.int32)

eqns = Eq(f[ix.symbolic_max], 1)

op = Operator(eqns)
op(x_m=0)

would suffer from two issues:

  1. The thickness symbol underlying ix.symbolic_max would not be concretised from x_ltkn to x_ltkn0
  2. The SubDimension thickness would never be given a value by _arg_values unless the user explicitly passed it to the Operator (and even then, it would not be adjusted to reflect any MPI decomposition)

These are rectified by this PR.
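For reference, a hedged check of the fixed behaviour, to be run right after the MFE above; the expected index (x_m + 2 - 1 == 1) is an assumption about how a left SubDimension of thickness 2 resolves its symbolic_max, not something stated in this PR:

# The thickness received its value (2) automatically via _arg_values, with no
# explicit kwarg needed and x_ltkn correctly concretised to x_ltkn0
assert f.data[1] == 1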

@FabioLuporini (Contributor) left a comment:
Looks like a very nice clean-up!

@@ -640,6 +642,11 @@ def _prepare_arguments(self, autotune=None, **kwargs):
        for d in reversed(toposort):
            args.update(d._arg_values(self._dspace[d], grid, **kwargs))

        # Process SubDimensionThicknesses
        for p in self.parameters:
Contributor:
I think this should be triggered from within SubDimension._arg_values itself

Contributor Author (@EdCaunt):
No, the issue is that the SubDimension isn't necessarily available to the Operator. You can craft legal Operators that contain only the thickness, not the parent SubDimension; hence the reworking in this PR.
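A hedged illustration of the point, reusing the MFE from the PR description (the lowered form and the op.dimensions check are indicative, not taken from the PR):

# f[ix.symbolic_max] lowers to an indexed access involving only the parent's
# symbols and the thickness symbol (roughly f[x_m + x_ltkn - 1]), so the
# Operator can be built without ever seeing `ix` itself
op = Operator(Eq(f[ix.symbolic_max], 1))
assert ix not in op.dimensions  # the scenario described above, as a check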

Contributor:
Why isn't it caught by self.input's default at line 487?

Contributor Author (@EdCaunt):
Assuming "it" means the subdomain, you can craft a legal operator as in the MFE at the top of this PR where the Operator never sees the SubDimension, only its thickness symbols

Contributor:
I don't mean the subdomain, no; I mean the thickness. You're saying you can create an operator where the Thickness (p here) is in op.parameters but not in op.inputs?

If that's the case, i.e. if it is not in inputs, then it should not need to be in args, since it's not an input.
If that's not the case, line 487 processes all inputs and should cover that case already.

Contributor Author (@EdCaunt):
I tried it, and this is not the case.

Contributor:
How is that possible? It's clearly in parameters, since you added this; so if you add is_Input and it doesn't catch it, then something is broken.

Contributor Author (@EdCaunt):
There's no grid in the kwargs at the point where _arg_values is called on the inputs, which makes it a no-op. It also needs to be done before objects get _arg_values called on them, IIRC; hence the current position of this loop.

You could reorder _prepare_arguments to process the grid earlier, but my functions-on-subdomains branch has a bunch of reworking here, and fixing the merge conflicts would be a pain, so best to stick a pin in it and fix it down the line, IMO.
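For context, a comments-only sketch of the ordering inside _prepare_arguments as described in this thread (a paraphrase of the discussion, not the actual implementation):

def _prepare_arguments_sketch(self, **kwargs):
    # 1. self.input gets _arg_values first; `grid` is not among the kwargs at
    #    this point, so a thickness handled here could not account for the
    #    MPI decomposition and the call degenerates to a no-op
    # 2. the grid is then recovered from the arguments gathered so far
    # 3. thicknesses must be resolved here, before the remaining objects
    #    (which may consume their values) get _arg_values called on them
    ...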

Contributor:
Can you open an issue about it, then?

Contributor:
Honestly, I also don't understand this thing. We need an issue and, ideally, a comment on top of this loop explaining what's going on and what should actually happen instead. You can do it in another branch, as it's a minor thing.

@@ -536,6 +535,81 @@ def _arg_check(self, *args, **kwargs):
# The Dimensions below are exposed in the user API. They can only be created by
# the user

class SubDimensionThickness(DataSymbol):
    """A DataSymbol to represent a thickness of a SubDimension"""
Contributor:
full stop

Contributor Author (@EdCaunt):
I thought we didn't do those on single-line docstrings?

codecov bot commented Oct 18, 2024

Codecov Report

Attention: Patch coverage is 97.03704% with 4 lines in your changes missing coverage. Please review.

Project coverage is 87.26%. Comparing base (2f4f80f) to head (6e7c017).
Report is 21 commits behind head on master.

Files with missing lines            Patch %   Lines
devito/ir/equations/algorithms.py   81.81%    1 Missing and 1 partial ⚠️
devito/types/dimension.py           97.82%    1 Missing and 1 partial ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master    #2470      +/-   ##
==========================================
+ Coverage   87.22%   87.26%   +0.04%     
==========================================
  Files         238      238              
  Lines       45258    45278      +20     
  Branches     4019     4022       +3     
==========================================
+ Hits        39475    39512      +37     
+ Misses       5103     5085      -18     
- Partials      680      681       +1     


@mloubout (Contributor) left a comment:

some minor comments


assert len(_SymbolCache) == init_cache_size + 4
# Delete the grid and check that all symbols are subsequently garbage collected
del grid
for n in (10, 3, 0):
Contributor:
Is 10 init_cache_size? If so, just:

del grid
assert len(_SymbolCache) == init_cache_size

and everything was deleted; that's all that's needed to check.

Contributor Author (@EdCaunt):
The init cache size is 11 (and consists of a different set of symbols).

@@ -640,6 +642,11 @@ def _prepare_arguments(self, autotune=None, **kwargs):
        for d in reversed(toposort):
            args.update(d._arg_values(self._dspace[d], grid, **kwargs))

        # Process SubDimensionThicknesses
        for p in self.parameters:
Contributor:
I think that if you add is_Input = True to class Thickness, then you don't need this extra loop.

For example: https://github.com/devitocodes/devito/blob/master/devito/types/constant.py#L42
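A sketch of what this suggestion would look like (hypothetical: it mirrors the Constant example linked above, and the other thread on this hunk explains why it is not sufficient on its own):

from devito.types.basic import DataSymbol  # assumed import path

class Thickness(DataSymbol):
    # Marking the symbol as an Operator input would route it through the
    # generic inputs loop in _prepare_arguments instead of a dedicated pass
    is_Input = True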

    def _arg_finalize(self, *args, **kwargs):
        return {}

    def _arg_apply(self, *args, **kwargs):
Contributor:
drop this, irrelevant

Contributor Author (@EdCaunt):
A no-op _arg_check, _arg_finalize, and _arg_apply are required for _prepare_arguments
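For reference, a sketch of the no-op trio being discussed; only _arg_finalize's body is visible in the quoted diffs, so the other two bodies are assumptions:

from devito.types.basic import DataSymbol  # assumed import path

class SubDimensionThickness(DataSymbol):
    def _arg_check(self, *args, **kwargs):
        pass  # nothing to validate for a derived thickness (body assumed)

    def _arg_finalize(self, *args, **kwargs):
        return {}

    def _arg_apply(self, *args, **kwargs):
        pass  # nothing to apply back after execution (body assumed)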

Contributor:
This should be fixed at the superclass site then, somehow, but fine.


        return {self.name: tkn}

    def _arg_finalize(self, *args, **kwargs):
Contributor:
drop this, irrelevant

@FabioLuporini merged commit 2f18ab8 into master on Nov 13, 2024 (31 checks passed), then deleted the tkns branch on November 13, 2024 at 08:50.

@FabioLuporini (Contributor):
Merged, thanks
